The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
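Two of the most common strategies named above, patch-based training for oversized samples and k-fold cross-validation on the training set, can be sketched in a few lines. This is a minimal NumPy illustration, not any participant's actual pipeline; the image size, patch size, stride, and fold count are arbitrary assumptions.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and collect fixed-size patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

# A toy 64x64 "image" yields a 3x3 grid of overlapping 32x32 patches.
image = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches = extract_patches(image, patch_size=32, stride=16)
folds = kfold_indices(len(patches), k=5)
```

In a real pipeline each fold would train one model on the remaining folds, and the resulting models could then be ensembled, the other strategy half of the respondents reported.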
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled datasets of CFPs in the ophthalmology community, large-scale datasets for screening only carry labels of disease categories, while datasets with annotations of fundus structures are usually small in size. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition devices. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis from an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations of glaucoma classification, optic disc/cup segmentation, as well as fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device and multi-quality data, several methods with strong generalization ability were presented in the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist end-users in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that produce high confidence in correct assertions and that assign low confidence levels to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analysis. Our evaluation code is made publicly available at https://github.com/ragmeh11/qu-brats.
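The behaviour such a metric rewards, namely that removing the most uncertain voxels should improve rather than degrade the remaining segmentation, can be illustrated with a toy filtered-Dice computation. This is an illustrative sketch, not the official QU-BraTS scoring code; the arrays and thresholds are invented.

```python
import numpy as np

def filtered_dice(pred, truth, confidence, thresholds):
    """Dice score of the voxels retained after filtering out those
    whose confidence falls below each threshold."""
    scores = []
    for t in thresholds:
        keep = confidence >= t
        p, g = pred[keep], truth[keep]
        inter = np.logical_and(p, g).sum()
        denom = p.sum() + g.sum()
        scores.append(1.0 if denom == 0 else 2.0 * inter / denom)
    return scores

# Toy binary segmentation: the two errors carry low confidence,
# so filtering them out raises the Dice of the retained voxels.
pred  = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=bool)
truth = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=bool)
conf  = np.array([0.9, 0.2, 0.8, 0.95, 0.7, 0.3, 0.85, 0.9])
scores = filtered_dice(pred, truth, conf, thresholds=[0.0, 0.5])
```

A well-calibrated uncertainty measure raises the filtered Dice as the threshold increases, while discarding as few correct assertions as possible.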
Recent advances in vision-and-language (V+L) models have shown a promising impact on the healthcare domain. However, such models struggle to explain how and why a particular decision was made. Moreover, model transparency and the involvement of domain expertise are critical success factors for machine learning models to enter this field. In this work, we study local surrogate explainability techniques to overcome the black-box problem of deep learning models. We explore the feasibility of using local surrogates in combination with an underlying V+L model to generate multi-modal visual and language explanations that resemble domain expertise. We demonstrate that such explanations can serve as helpful feedback to guide model training for data scientists and machine learning engineers in the field.
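A local surrogate of the kind discussed here fits a simple, interpretable model to a black-box model's behaviour in the neighbourhood of one input. The following is a minimal LIME-style sketch in NumPy, not the method used in this work; the black box, kernel width, and sample count are illustrative assumptions.

```python
import numpy as np

def local_surrogate(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Fit a weighted linear surrogate to a black-box model around x.
    Perturbations closer to x get larger weights; the returned
    coefficients indicate local feature importance."""
    rng = np.random.default_rng(seed)
    X = x + scale * rng.standard_normal((n_samples, x.size))
    y = black_box(X)
    # Proximity kernel: exponential decay with squared distance from x.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]  # drop the intercept

# A toy black box in which the second feature dominates locally.
black_box = lambda X: 0.5 * X[:, 0] + 3.0 * X[:, 1]
coef = local_surrogate(black_box, x=np.array([1.0, 2.0]))
```

The surrogate's coefficients then rank the input features by local influence, which is the kind of signal a domain expert could inspect alongside the model's multi-modal output.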
Offline handwritten mathematical expression recognition (HMER) is a major area in the field of mathematical expression recognition. Compared with online HMER, offline HMER is often considered a more difficult problem due to the lack of temporal information and the variability of writing styles. In this paper, we propose an encoder-decoder model that uses paired adversarial learning. Semantically invariant features are extracted in the encoder from handwritten mathematical expression images and their printed counterparts. Learning semantically invariant features, combined with a DenseNet encoder and a Transformer decoder, helps us improve the expression recognition rate over previous studies. Evaluated on the CROHME datasets, we have been able to improve the state-of-the-art result on the CROHME 2019 test set by 4%.
In recent years, the development of robust Intelligent Transportation Systems (ITS) has been addressed worldwide to improve traffic efficiency by reducing frequent traffic problems. As one of its applications, vehicle re-identification has attracted ample interest in the fields of computer vision and robotics. Convolutional neural network (CNN)-based methods have been developed to perform vehicle re-identification and to address key challenges such as occlusion, illumination changes, and scale variation. The advancement of Transformers in computer vision has opened up opportunities to further explore the re-identification pipeline and improve performance. In this paper, a framework is developed to perform the re-identification of vehicles across CCTV cameras. To perform re-identification, the proposed framework fuses the vehicle representations learned by a CNN and a Transformer model. The framework is tested on a dataset containing 81 unique vehicle identities observed across 20 CCTV cameras. In the experiments, the fused vehicle re-identification framework achieves an mAP of 61.73%, which is significantly better than the standalone CNN or Transformer models.
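The fusion-plus-retrieval idea can be sketched minimally: embed each vehicle image with both backbones, concatenate the L2-normalised embeddings, and rank the gallery by cosine similarity to a query. Random vectors stand in for real CNN/Transformer features here; this is an illustrative sketch, not the paper's actual architecture.

```python
import numpy as np

def fuse(cnn_feats, trans_feats):
    """Concatenate L2-normalised CNN and Transformer embeddings."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    return l2norm(np.hstack([l2norm(cnn_feats), l2norm(trans_feats)]))

def rank_gallery(query, gallery):
    """Return gallery indices sorted by cosine similarity, best first."""
    return np.argsort(-(gallery @ query))

# Stand-in features: 5 gallery vehicles, 8-dim CNN + 4-dim Transformer.
rng = np.random.default_rng(0)
gallery = fuse(rng.standard_normal((5, 8)), rng.standard_normal((5, 4)))
query = gallery[2]          # the true match is gallery entry 2
ranking = rank_gallery(query, gallery)
```

Normalising each modality before concatenation keeps one backbone from dominating the fused distance purely through the scale of its features.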
We present the Pathways Autoregressive Text-to-Image (Parti) model, which generates high-fidelity photorealistic images and supports content-rich synthesis involving complex compositions and world knowledge. Parti treats text-to-image generation as a sequence-to-sequence modeling problem, analogous to machine translation, with sequences of image tokens as the target output rather than text tokens in another language. This strategy can naturally tap into prior work on large language models, which have seen continued advances in capability and performance through scaling data and model sizes. Our approach is simple: first, Parti uses a Transformer-based image tokenizer, ViT-VQGAN, to encode images as sequences of discrete tokens. Second, we achieve consistent quality improvements by scaling the encoder-decoder Transformer model up to 20B parameters, with a new state-of-the-art zero-shot FID score of 7.23 and a finetuned FID score of 3.22 on MS-COCO. Our detailed analysis on Localized Narratives as well as PartiPrompts (P2), a new holistic benchmark of over 1600 English prompts, demonstrates the effectiveness of Parti across a wide variety of categories and difficulty aspects. We also explore and highlight the limitations of our models in order to define and exemplify key areas of focus for further improvement. See https://parti.research.google/ for high-resolution images.
Most existing works on pedestrian pose estimation do not consider estimating the pose of an occluded pedestrian, as annotations of the occluded parts are not available in the relevant automotive datasets. For example, CityPersons, a well-known dataset for pedestrian detection in automotive scenes, does not provide pose annotations, whereas MS-COCO, a non-automotive dataset, contains human pose annotations. In this work, we propose a multi-task framework that performs detection and instance segmentation tasks on these two distributions. Thereafter, an encoder learns pose-specific features from the pedestrian instances of both distributions using an unsupervised instance-level adaptation method. The proposed framework improves the state-of-the-art performance of pose estimation, pedestrian detection, and instance segmentation.
Forests play a vital role in reducing greenhouse-gas emissions and mitigating climate change, besides maintaining the world's biodiversity. Existing satellite-based forest monitoring systems that exploit supervised learning approaches are limited to specific regions and rely on manually annotated data to identify forests. This work envisions forest identification as a few-shot semantic segmentation task to achieve generalization across different geographical regions. The proposed few-shot segmentation approach incorporates a texture attention module into a prototypical network to highlight the texture features of forests. Indeed, forests exhibit a characteristic texture that differs from other classes such as roads, water, etc. In this work, the proposed approach is trained to identify the tropical forests of South Asia and is adapted to identify the temperate forests of Central Europe with the help of a few (one image) manually annotated support images of temperate forest. The proposed method obtains an IoU of 0.62 for the forest class (1-way, 1-shot), which is significantly higher than existing few-shot semantic segmentation approaches (0.46 with PANet). This result suggests that the proposed method can generalize across geographical regions for forest identification, creating an opportunity to develop a global forest-cover identification tool.
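The prototypical-network step at the heart of such few-shot segmentation — masked average pooling of support features into a class prototype, then nearest-prototype assignment of query pixels — can be sketched as follows. The toy features and mask are invented for illustration, and the texture attention module is omitted.

```python
import numpy as np

def prototype(features, mask):
    """Masked average pooling: mean feature vector of the labelled pixels."""
    return features[mask].mean(axis=0)

def segment(query_feats, protos):
    """Assign each query pixel to the nearest prototype (cosine similarity)."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = l2norm(query_feats) @ l2norm(protos).T
    return sims.argmax(axis=-1)

# Toy support image: 4x4 pixels with 2-dim features, forest in the left half.
feats = np.zeros((4, 4, 2))
feats[:, :2] = [1.0, 0.1]   # forest-like texture response
feats[:, 2:] = [0.1, 1.0]   # background response
forest_mask = np.zeros((4, 4), dtype=bool)
forest_mask[:, :2] = True

protos = np.stack([prototype(feats, ~forest_mask),   # class 0: background
                   prototype(feats, forest_mask)])   # class 1: forest
labels = segment(feats.reshape(-1, 2), protos).reshape(4, 4)
```

In the 1-way 1-shot setting described above, a single annotated temperate-forest image would supply the support features and mask from which the forest prototype is pooled.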
Code review is an integral part of any mature software development process, and identifying the best reviewer for a code change is a well-accepted problem within the software engineering community. Selecting a reviewer who lacks expertise and understanding can slow development or result in more defects. To date, most reviewer recommendation systems rely primarily on historical file change and review information: those who changed or reviewed a file in the past are considered the best positioned to review it in the future. We posit that while these approaches are able to identify and suggest qualified reviewers, they may be blind to reviewers who have the needed expertise but have simply never interacted with the changed files before. To address this, we present CORAL, a novel approach to reviewer recommendation that leverages a socio-technical graph built from the rich set of entities (developers, repositories, files, pull requests, work items, etc.) and their relationships in modern source code management systems. We employ a graph convolutional neural network on this graph and train it on two and a half years of history on 332 repositories. We show that CORAL is able to model the manual history of reviewer selection remarkably well. Further, based on an extensive user study, we demonstrate that this approach identifies relevant and qualified reviewers whom traditional reviewer recommenders miss, and that these developers desire to be included in the review process. Finally, we find that "classical" reviewer recommendation systems perform better on smaller (in terms of developers) software projects while CORAL excels on larger projects, suggesting that there is "no one model to rule them all."
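The graph convolution CORAL applies to its socio-technical graph propagates information along edges, so a developer can accumulate signal from files and pull requests they never touched directly. Below is a minimal single-layer sketch (symmetrically normalised adjacency with self-loops, then a linear projection and ReLU), with an invented three-node graph standing in for the real developer/file/PR graph and toy weights in place of trained ones.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalise the adjacency, then propagate, project, and apply ReLU."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)

# Toy graph: nodes 0 and 1 are linked (e.g. developer-file edge),
# node 2 is isolated.
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)
H = np.eye(3)                # one-hot initial node features
W = np.ones((3, 2))          # toy projection weights
H1 = gcn_layer(A, H, W)
```

After one layer, the two connected nodes mix each other's features, which is the mechanism that lets a graph-based recommender surface reviewers with no direct file-interaction history.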